GAPrune: Gradient-Alignment Pruning for Domain-Aware Embeddings
Yixuan Tang, Yi Yang
The Hong Kong University of Science and Technology
ytangch@connect.ust.hk

Abstract

Domain-specific embedding models have shown promise for applications that require specialized semantic understanding, such as coding agents and financial retrieval systems, often achieving larger performance gains than general-purpose models. However, state-of-the-art embedding models are typically built on LLMs with billions of parameters, making deployment challenging in resource-constrained environments. Model compression through pruning offers a promising solution, but existing pruning methods treat all parameters uniformly, failing to distinguish general semantic representations from domain-specific patterns, and thus make suboptimal pruning decisions. We therefore propose GAPrune, a pruning framework that addresses this challenge by weighing domain importance against preservation of the general linguistic foundation. Our method uses Fisher Information to measure importance and general-domain gradient alignment to assess parameter behavior, then combines these signals into our Domain Alignment Importance (DAI) score. A low DAI score indicates that a parameter is either unimportant for the domain task or creates conflicts between domain and general objectives. Experiments on two domain benchmarks, FinMTEB and ChemTEB, show that GAPrune stays within 2.5% of dense-model performance under one-shot pruning at 50% sparsity, while outperforming all baselines. With 100 steps of retraining, GAPrune achieves a +4.51% improvement on FinMTEB and +1.73% on ChemTEB, demonstrating that our pruning strategy not only preserves but enhances domain-specific capabilities.

1 Introduction

The deployment of large language models in specialized domains has revealed a critical challenge: while general-purpose models excel at broad language understanding, they often fail to capture the domain-specific semantics crucial for real-world applications (Gu et al., 2021; Yao et al., 2024).
This semantic gap is especially evident in embedding models, where precise representation of domain-specific concepts directly impacts downstream task performance.
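The abstract's scoring idea can be illustrated with a minimal sketch: approximate per-parameter Fisher importance by the squared domain gradient, use the domain-general gradient product as a sign-sensitive alignment signal (positive when the two objectives cooperate, negative when they conflict), combine the two, and prune the lowest-scoring parameters. The weight `alpha` and this exact combination are illustrative assumptions, not the paper's published DAI formula.

```python
import numpy as np

def dai_scores(domain_grads, general_grads, alpha=0.5):
    """Sketch of a DAI-style per-parameter score.

    domain_grads / general_grads: gradients of the same parameters,
    accumulated over domain and general-corpus batches respectively.
    NOTE: alpha and this combination are assumptions for illustration.
    """
    fisher = domain_grads ** 2                 # diagonal Fisher approximation
    alignment = domain_grads * general_grads   # >0 cooperating, <0 conflicting
    return alpha * fisher + (1.0 - alpha) * alignment

def prune_mask(scores, sparsity=0.5):
    """Boolean keep-mask retaining the top (1 - sparsity) fraction by score."""
    k = int(scores.size * sparsity)
    if k == 0:
        return np.ones_like(scores, dtype=bool)
    threshold = np.partition(scores.ravel(), k)[k]
    return scores >= threshold
```

In a one-shot setting, the mask would be applied to the dense weights once (zeroing pruned entries); the paper's retraining phase then fine-tunes the surviving parameters.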